Lesson 03
Functions, modules, and venv

Structure code into small, testable functions and keep dependencies isolated.

Functions

```python
def greet(name: str) -> str:
    return f"Hello, {name}!"

print(greet("SigLabs"))
```

Arguments and returns

```python
from typing import Iterable

def pct_change(a: float, b: float) -> float:
    return (b - a) / a * 100 if a else 0.0

print(pct_change(100, 110))  # 10.0
```

Modules and imports

Create a file metrics.py:

```python
# metrics.py
```

Import it in another file:

```python
# app.py
import metrics

print(metrics.annualize([0.01, -0.002, 0.004]))
```

Group related files into a package by adding an __init__.py file:

```
project/
    analytics/
        __init__.py
        metrics.py
    app.py
```

Then import with `from analytics import metrics`.

Virtual environments

Create an isolated environment for dependencies:

```
# Windows (PowerShell)
python -m venv .venv
.\.venv\Scripts\Activate.ps1
pip install requests pytest
```

Deactivate with `deactivate`. Keep a requirements.txt via `pip freeze > requirements.txt`.

The `concurrent.futures` module provides a high-level interface for asynchronously executing callables. It simplifies running tasks in separate threads (`ThreadPoolExecutor`) or processes (`ProcessPoolExecutor`), making it easier to parallelize I/O-bound or CPU-bound operations without managing threads or processes directly.

```python
# Use ThreadPoolExecutor for I/O-bound tasks like network requests
import concurrent.futures
import time

import requests

URLS = [
    "https://example.com",
    "https://example.org",
]  # illustrative; the original URL list is not shown

def fetch_url(url):
    """Fetches a URL and returns its status code and content length."""
    start = time.time()
    try:
        response = requests.get(url, timeout=10)
        return url, response.status_code, len(response.content)
    except requests.exceptions.RequestException as e:
        duration = time.time() - start
        print(f"Failed {url} in {duration:.2f}s - Error: {e}")
        return url, "Error", str(e)

start_total = time.time()
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as executor:
    # Results will be in the order the tasks were submitted
    results = list(executor.map(fetch_url, URLS))

print("\n--- Results ---")
for url, status, data in results:
    if status == "Error":
        print(f"{url} -> Error: {data}")
    else:
        print(f"{url} -> Status: {status}, Length: {data}")

print(f"Total time: {time.time() - start_total:.2f}s")
```
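The `map` call above returns results in the order the tasks were submitted. When you would rather handle each result as soon as it finishes, `concurrent.futures` also offers `submit` plus `as_completed`. Here is a minimal sketch, using a toy `square` task as an illustrative stand-in for `fetch_url` (not part of the lesson itself):

```python
import concurrent.futures

def square(n):
    """Toy task standing in for fetch_url; any callable works here."""
    return n * n

with concurrent.futures.ThreadPoolExecutor(max_workers=4) as executor:
    # submit() returns a Future per task; as_completed() yields each
    # future as soon as it finishes, regardless of submission order.
    futures = {executor.submit(square, n): n for n in range(5)}
    results = {}
    for future in concurrent.futures.as_completed(futures):
        results[futures[future]] = future.result()

print(sorted(results.items()))  # [(0, 0), (1, 1), (2, 4), (3, 9), (4, 16)]
```

Because completion order is nondeterministic, collecting into a dict keyed by the original input keeps the output reproducible.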
Modules in practice: a helper file and a script that imports from it.

```python
# file: mathutils.py
def pct_change(a: float, b: float) -> float:
    return (b - a) / a * 100 if a else 0.0
```

```python
# file: app.py
from mathutils import pct_change

print(pct_change(100, 112))  # 12.0
```

The `functools` module provides higher-order functions and operations on callable objects. Key tools include `partial` for creating specialized versions of functions with pre-filled arguments, and `lru_cache` for memoizing function calls (caching results) to speed up expensive computations with repeated inputs.

```python
import functools
import time

# --- Using functools.partial ---
def power(base, exponent):
    """Calculates base raised to the power of exponent."""
    print(f"Calculating {base}^{exponent}")
    return base ** exponent

# Create a specialized function 'square' where exponent is always 2
square = functools.partial(power, exponent=2)

# Create a specialized function 'cube' where exponent is always 3
cube = functools.partial(power, exponent=3)

print("Using partial functions:")
print(f"Square of 5: {square(5)}")  # Only need to provide 'base'
print(f"Cube of 4: {cube(4)}")  # Only need to provide 'base'
print("-" * 20)

# --- Using functools.lru_cache ---
@functools.lru_cache(maxsize=128)  # Cache up to 128 most recent unique calls
def expensive_computation(n):
    """Simulates an expensive computation, like Fibonacci."""
    print(f"Computing for n={n}...")
    time.sleep(0.5)  # Simulate work
    if n < 2:
        return n
    return expensive_computation(n - 1) + expensive_computation(n - 2)

print("Using lru_cache:")
start = time.time()
result1 = expensive_computation(10)
duration1 = time.time() - start
print(f"First call for 10 took {duration1:.2f}s, result: {result1}")

start = time.time()
result2 = expensive_computation(10)  # Should be much faster due to cache
duration2 = time.time() - start
print(f"Second call for 10 took {duration2:.2f}s, result: {result2}")
```

Next steps

Next up: files, errors, and testing in Lesson 04.

← Prev: Data and control flow
Next: Files, errors, and testing →